
    Improving Detectors Using Entangling Quantum Copiers

    We present a detection scheme which, using imperfect detectors and imperfect quantum copying machines (which entangle the copies), allows one to extract more information from an incoming signal than with the imperfect detectors alone. Comment: 4 pages, 2 figures, REVTeX, to be published in Phys. Rev.

    Stably reflexive modules and a lemma of Knudsen

    In his fundamental work on the stack of stable n-pointed genus g curves, Finn F. Knudsen introduced the concept of a stably reflexive module in order to prove a key technical lemma. We propose an alternative definition and generalise the results in the appendix to his article. Then we give a 'coordinate-free' generalisation of his lemma, generalise a construction used in Knudsen's proof concerning versal families of pointed algebras, and show that Knudsen's stabilisation construction works for plane curve singularities. In addition we prove approximation theorems generalising Cohen-Macaulay approximation with stably reflexive modules in flat families. The generalisation is not covered (even in the closed fibres) by the Auslander-Buchweitz axioms. Comment: 27 pages. The statement in Thm. 6.1 (iv) has been corrected. Many proofs have been expanded. A few minor changes in some of the statements. Comments and an example added. To appear in J. Algebr

    Equivalent efficiency of a simulated photon-number detector

    Homodyne detection is considered as a way to improve the efficiency of communication near the single-photon level. The current lack of commercially available infrared photon-number detectors significantly reduces the mutual information accessible in such a communication channel. We consider simulating direct detection via homodyne detection. We find that our particular simulated direct detection strategy could provide limited improvement in the classical information transfer. However, we argue that homodyne detectors (and a polynomial number of linear optical elements) cannot simulate photocounters arbitrarily well, since otherwise the exponential gap between quantum and classical computers would vanish. Comment: 4 pages, 4 figures

    Critical Noise Levels for LDPC decoding

    We determine the critical noise level for decoding low-density parity-check (LDPC) error-correcting codes based on the magnetization enumerator (M), rather than on the weight enumerator (W) employed in the information-theory literature. The interpretation of our method is appealingly simple, and the relation between different decoding schemes, such as typical-pairs decoding, MAP, and finite-temperature decoding (MPM), becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical-physics approach. Comment: 9 pages, 5 figures
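
    For context, the benchmark such critical noise levels are measured against is Shannon's limit for the binary symmetric channel: the largest flip probability p_c at which a rate-R code can still be decoded reliably, i.e. the solution of 1 - H(p_c) = R. The sketch below computes this textbook quantity numerically; it is standard information theory, not the magnetization-enumerator method of the paper, and all names in it are ours.

```python
# Shannon's limiting noise level p_c for a rate-R code over a binary
# symmetric channel: the p in (0, 1/2) solving 1 - H(p) = R.
import math

def binary_entropy(p: float) -> float:
    """Binary entropy H(p) in bits, with H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def shannon_threshold(rate: float, tol: float = 1e-12) -> float:
    """Bisect for the p with 1 - H(p) = rate; 1 - H(p) decreases on (0, 1/2)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1.0 - binary_entropy(mid) > rate:
            lo = mid  # capacity still exceeds the rate, so noise can rise
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: a rate-1/2 code cannot be decoded reliably beyond p_c ~ 0.110.
print(shannon_threshold(0.5))
```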

    Quantum Stabilizer Codes and Classical Linear Codes

    We show that within any quantum stabilizer code there lurks a classical binary linear code with similar error-correcting capabilities, thereby demonstrating new connections between quantum codes and classical codes. Using this result -- which applies to degenerate as well as nondegenerate codes -- previously established necessary conditions for classical linear codes can be easily translated into necessary conditions for quantum stabilizer codes. Examples of specific consequences are: for a quantum channel subject to a delta-fraction of errors, the best asymptotic capacity attainable by any stabilizer code cannot exceed H(1/2 + sqrt(2*delta*(1-2*delta))); and, for the depolarizing channel with fidelity parameter delta, the best asymptotic capacity attainable by any stabilizer code cannot exceed 1 - H(delta). Comment: 17 pages, REVTeX, with two figures
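
    The two capacity bounds quoted above are straightforward to evaluate numerically. The following sketch computes both from the formulas given in the abstract; the function names are ours.

```python
# Evaluating the two stabilizer-code capacity upper bounds from the abstract.
import math

def H(p: float) -> float:
    """Binary entropy in bits, with H(0) = H(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def bound_error_fraction(delta: float) -> float:
    """Bound H(1/2 + sqrt(2*delta*(1 - 2*delta))) for a delta-fraction of errors."""
    return H(0.5 + math.sqrt(2.0 * delta * (1.0 - 2.0 * delta)))

def bound_depolarizing(delta: float) -> float:
    """Bound 1 - H(delta) for the depolarizing channel with fidelity delta."""
    return 1.0 - H(delta)

for d in (0.05, 0.10, 0.15):
    print(f"delta = {d:.2f}: "
          f"error-fraction bound = {bound_error_fraction(d):.4f}, "
          f"depolarizing bound = {bound_depolarizing(d):.4f}")
```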

    Statistical mechanics of lossy data compression using a non-monotonic perceptron

    The performance of a lossy data compression scheme for uniformly biased Boolean messages is investigated via methods of statistical mechanics. Inspired by a formal similarity to the storage capacity problem in neural-network research, we utilize a perceptron whose transfer function is designed appropriately in order to compress and decode the messages. Employing the replica method, we analytically show that, in most cases, our scheme achieves the optimal performance known in the framework of lossy compression in the limit of infinite code length. The validity of the obtained results is numerically confirmed. Comment: 9 pages, 5 figures, Physical Review
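
    As a rough illustration of the decoding side of such a scheme, the sketch below assumes a window-type non-monotonic transfer function and a random ±1 codebook shared by encoder and decoder; the paper's exact transfer function and conventions may differ, and all names here are ours.

```python
# Sketch of perceptron-based lossy decompression with a non-monotonic
# (window-type) transfer function. Hypothetical names and conventions.
import numpy as np

def f_k(u: np.ndarray, k: float) -> np.ndarray:
    """Non-monotonic transfer: +1 inside the window |u| <= k, -1 outside."""
    return np.where(np.abs(u) <= k, 1.0, -1.0)

def decode(s: np.ndarray, X: np.ndarray, k: float) -> np.ndarray:
    """Map a compressed vector s in {-1,+1}^N to M reproduced bits.

    X is an M x N codebook of i.i.d. +/-1 entries; the rate is N / M.
    """
    return f_k(X @ s / np.sqrt(s.shape[0]), k)

# Encoding (not shown) would search for the s whose decoding lies within the
# target distortion of the original message -- the computationally hard step
# that the replica analysis handles only analytically.
rng = np.random.default_rng(0)
M, N = 200, 100  # rate N / M = 0.5
X = rng.choice([-1.0, 1.0], size=(M, N))
s = rng.choice([-1.0, 1.0], size=N)
print(decode(s, X, k=1.0)[:10])
```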

    Thouless-Anderson-Palmer Approach for Lossy Compression

    We study an ill-posed linear inverse problem in which a binary sequence is reproduced using a sparse matrix. According to a previous study, this model can theoretically provide an optimal compression scheme for an arbitrary distortion level, though the encoding procedure remains an NP-complete problem. In this paper, we focus on the consistency condition for a Markov-type dynamics model to derive an iterative algorithm, following the approach of Thouless, Anderson, and Palmer (TAP). Numerical results show that the algorithm can empirically saturate the theoretical limit for the sparse construction of our codes, which is also very close to the rate-distortion function. Comment: 10 pages, 3 figures
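
    To give the flavour of a TAP-style iteration, here is a generic damped fixed-point update for Ising-spin magnetizations, i.e. the naive mean-field term plus the Onsager reaction term. It illustrates the shape of such algorithms only; the authors' actual update rule for the compression problem differs in its details.

```python
# Generic damped TAP fixed-point iteration for Ising spins (illustrative only):
#   m_i <- tanh(beta * (h_i + sum_j J_ij m_j
#                       - beta * m_i * sum_j J_ij^2 * (1 - m_j^2)))
import numpy as np

def tap_iterate(J, h, beta, n_steps=200, damping=0.5, seed=0):
    rng = np.random.default_rng(seed)
    m = 0.01 * rng.standard_normal(len(h))  # small random initial magnetizations
    for _ in range(n_steps):
        onsager = beta * m * ((J ** 2) @ (1.0 - m ** 2))  # Onsager reaction term
        m_new = np.tanh(beta * (h + J @ m - onsager))
        m = damping * m + (1.0 - damping) * m_new  # damping aids convergence
    return m

rng = np.random.default_rng(1)
n = 50
J = rng.standard_normal((n, n)) / np.sqrt(n)
J = (J + J.T) / 2.0          # symmetric couplings
np.fill_diagonal(J, 0.0)     # no self-coupling
print(tap_iterate(J, h=np.zeros(n), beta=0.5)[:5])
```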

    Skeletal muscle carnitine metabolism during intense exercise in human volunteers

    Increasing skeletal muscle carnitine content enhances pyruvate dehydrogenase complex (PDC) flux during 30 minutes of continuous exercise at 80% Wmax, reducing reliance on non-mitochondrial ATP production and improving work output. These studies in healthy volunteers evaluated a carnitine feeding strategy that did not rely on the high carbohydrate load used previously, and then investigated whether manipulating muscle carnitine could alter the adaptations to a period of submaximal high-intensity intermittent training (HIT). The rate of uptake of orally ingested ²H₃-carnitine into skeletal muscle was quantified directly in vivo for the first time, and increased 5-fold following ingestion of an 80 g carbohydrate formulation. This positive forearm carnitine balance was abolished when the carbohydrate load was supplemented with 40 g of whey protein, suggesting a novel antagonisation of insulin-stimulated muscle carnitine transport by amino acids. Skeletal muscle biopsy sampling demonstrated minimal acetylcarnitine accumulation and non-mitochondrial ATP production during single-leg knee extension at 85% Wmax, suggesting that PDC flux does not limit oxidative ATP production under these conditions. Conversely, PDC flux declined over repeated bouts of cycling at 100% Wmax, as evidenced by greater non-mitochondrial ATP production in the face of similar acetylcarnitine accumulation. This suggested that muscle carnitine availability could influence oxidative ATP delivery during submaximal HIT. Manipulating muscle carnitine content by daily carnitine/carbohydrate feeding elevated free carnitine availability and maintained PDC flux during repeated bouts of intense exercise. However, the profound improvements in oxidative ATP delivery produced by HIT itself eclipsed any effect of this carnitine-mediated increase in PDC flux on non-mitochondrial ATP production, and carnitine supplementation did not increase exercise capacity beyond that achieved with submaximal HIT alone. These novel data advance our understanding of muscle carnitine transport and of the interplay between carnitine metabolism, PDC flux, and non-mitochondrial ATP production during intense exercise, with important implications for the development of nutritional and exercise-prescription strategies to enhance human performance and health.

    On the existence of 0/1 polytopes with high semidefinite extension complexity

    Rothvoß showed that there exists a 0/1 polytope (a polytope whose vertices lie in {0,1}^n) such that any higher-dimensional polytope projecting to it must have 2^{Ω(n)} facets, i.e., its linear extension complexity is exponential. The question of whether there exists a 0/1 polytope with high PSD extension complexity was left open. We answer this question in the affirmative by showing that there is a 0/1 polytope such that any spectrahedron projecting to it must be the intersection of a semidefinite cone of dimension 2^{Ω(n)} and an affine space. Our proof relies on a new technique for rescaling semidefinite factorizations.